11 September 2023

CS4OA

at the joint SIGDIAL-INLG 2023 conference


The 1st Workshop on Counter Speech for Online Abuse

A workshop on creating, investigating and improving tools for producing and evaluating counter speech.

Overview

Hate speech and abusive, toxic language are prevalent in online spaces. For example, a 2019 survey found that 30-40% of people in the UK have experienced online abuse, and platforms such as Facebook take down millions of harmful posts every year with the help of AI tools. While removing such content can immediately reduce the number of harmful messages, it can invite accusations of censorship and may not curb hate in the long term. An alternative approach is to reply with counter speech: targeted responses that refute the hateful language with thoughtful, cogent reasoning and fact-bound arguments. Counter speech has been shown to influence the behaviour of both the perpetrators of abuse and the bystanders who witness the interactions, as well as to provide support to victims.

The sheer volume of content shared on social media every day means that hate mitigation through counter speech requires reliable, efficient and scalable tools. Efforts have recently been made to curate hate-countering datasets and to automate the production of counter speech. However, this research field is still in its infancy, and many questions remain open about the most effective approaches and methods, and about how to evaluate them.

This first multidisciplinary workshop aims to bring together researchers from diverse backgrounds such as computer science and the social sciences, alongside policy makers and other stakeholders, to understand how counter speech is currently used by individuals, activists and organisations to tackle abuse, how Natural Language Processing (NLP) and Natural Language Generation (NLG) can be applied to produce counter narratives, and the implications of using large language models for this task. It will also address, among other questions, how to evaluate and measure the impact of counter speech, the importance of expert knowledge from civil society in developing counter speech datasets and taxonomies, and how to ensure fairness and mitigate the biases present in language models when generating counter speech.